79 research outputs found

    Performance characterisation of IP networks

    The initial rapid expansion of the Internet, in terms of complexity and number of hosts, was followed by an increased interest in its overall parameters and the quality the network offers. This growth led, in the first instance, to extensive research in the area of network monitoring, in order to better understand the characteristics of the current Internet. In parallel, studies were made in the area of protocol performance modelling, aiming to estimate the performance of various Internet applications. A key goal of this research project was the analysis of current Internet traffic performance from a dual perspective: monitoring and prediction. In order to achieve this, the study has three main phases. It starts by describing the relationship between data transfer performance and network conditions, a relationship that proves to be critical when studying application performance. The next phase proposes a novel architecture for inferring network conditions and transfer parameters from captured traffic analysis. The final phase describes a novel alternative to current TCP (Transmission Control Protocol) models, which provides the relationship between network, data transfer, and client characteristics on one side, and the resulting TCP performance on the other, while accounting for the features of current Internet transfers. The proposed inference analysis method for network and transfer parameters uses online, non-intrusive monitoring of captured traffic from a single point. This technique overcomes limitations of prior approaches, which are typically geared towards intrusive and/or dual-point offline analysis. The method includes several novel aspects, such as TCP timestamp analysis, which allows bottleneck bandwidth inference and more accurate receiver-based parameter measurement, neither of which is possible using traditional acknowledgment-based inference. The results of the traffic analysis determine the location of any degradation in network conditions relative to the position of the monitoring point. The proposed monitoring framework infers the performance parameters of the network paths transited by the analysed traffic, subject to the position of the monitoring point, and it can be used as a starting point in proactive network management. The TCP performance prediction model is based on the observation that current, potentially unknown, TCP implementations, as well as connection characteristics, are too complex for a mathematical model. The model proposed in this thesis uses an artificial intelligence-based analysis method to establish the relationship between the parameters that influence the evolution of TCP transfers and the resulting performance of those transfers. Based on preliminary tests of classification and function approximation algorithms, a neural network analysis approach was preferred due to its prediction accuracy. Both the monitoring method and the prediction model are validated using a combination of traffic traces, ranging from synthetic transfers and environments, produced using a network simulator/emulator, to traces produced using a script-based, controlled client and uncontrolled traces, both using real Internet traffic. The validation tests indicate that the proposed approaches provide better accuracy in terms of inferring network conditions and predicting transfer performance in comparison with previous methods.
    The non-intrusive analysis of the real network traces provides comprehensive information on current Internet characteristics, indicating low-loss, low-delay, and high-bottleneck-bandwidth conditions for the majority of the studied paths. Overall, this study provides a method for inferring the characteristics of Internet paths based on traffic analysis, an efficient methodology for predicting TCP transfer performance, and a firm basis for future research in the areas of traffic analysis and performance modelling.
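
    As a concrete illustration of the prediction idea, the following Python sketch trains a small neural network to map path conditions to TCP throughput. It is a minimal stand-in, not the thesis's model: the features (RTT and loss rate) are a reduced set, and the training labels come from the well-known Mathis steady-state approximation rather than from measured transfers.

        # Minimal sketch: learn TCP throughput from path conditions with a
        # neural network (assumes numpy and scikit-learn are installed).
        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        n = 5000
        rtt = rng.uniform(0.01, 0.5, n)    # round-trip time in seconds
        loss = rng.uniform(1e-4, 0.05, n)  # packet loss probability
        mss = 1460.0                       # segment size in bytes

        # Illustrative labels from the Mathis approximation:
        # throughput ~ (MSS / RTT) * C / sqrt(p), with C ~ 0.93
        throughput = (mss / rtt) * 0.93 / np.sqrt(loss)

        X = np.column_stack([rtt, loss])
        y = np.log(throughput)             # log-scale target stabilises training

        model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                             random_state=0)
        model.fit(X, y)

        # Predicted throughput for a 50 ms RTT, 1% loss path
        print(np.exp(model.predict([[0.05, 0.01]])[0]), "bytes/s")

    The attraction of a learned mapping, per the abstract, is that it can absorb the behaviour of complex or unknown TCP implementations that closed-form models such as the one used for labelling above cannot capture.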

    Location-based transmission using a neighbour-aware cross-layer MAC for ad hoc networks

    In a typical ad hoc network, mobile nodes have scarce shared bandwidth and limited battery life, so optimising resource usage and enhancing overall network performance is the ultimate aim in such networks. This paper proposes a new cross-layer MAC algorithm called Location Based Transmission using a Neighbour Aware Cross-Layer MAC (LBT-NA Cross-Layer MAC). On one hand, it aims to reduce the transmission power used when communicating with the intended receiver by exchanging location information between nodes; on the other hand, the MAC uses a new random backoff range based on the number of active neighbour nodes, unlike the standard IEEE 802.11 series, where a random backoff value is chosen from a fixed range of 0-31. The validation tests demonstrate that the proposed algorithm increases battery life, increases spatial reuse and enhances network performance.
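
    The two mechanisms can be sketched in a few lines of Python. This is an illustrative reading of the abstract, not the paper's algorithm: the window scaling rule, the path-loss model and all constants are assumptions.

        # Sketch 1: backoff drawn from a window scaled by active neighbours,
        # instead of the fixed IEEE 802.11 range 0-31.
        import math, random

        def neighbour_aware_backoff(active_neighbours, w_min=8, w_max=256):
            window = min(w_max, max(w_min, 2 * active_neighbours))
            return random.randint(0, window - 1)

        # Sketch 2: transmit power chosen for the sender-receiver distance,
        # here via free-space path loss (FSPL) at 2.4 GHz:
        # FSPL(dB) = 20*log10(d) + 20*log10(f) - 147.55
        def tx_power_for_distance(d_m, freq_hz=2.4e9,
                                  rx_sensitivity_dbm=-85.0, margin_db=10.0):
            fspl = 20 * math.log10(d_m) + 20 * math.log10(freq_hz) - 147.55
            return rx_sensitivity_dbm + fspl + margin_db  # required dBm

        print(neighbour_aware_backoff(3))  # small window when contention is low
        print(round(tx_power_for_distance(50.0), 1), "dBm for a 50 m link")

    The intuition is that with few contenders a small window wastes less idle time, while a just-sufficient transmit power leaves distant nodes free to transmit concurrently.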

    A machine-learning approach to detect users' suspicious behaviour through the Facebook wall

    Facebook represents the current de facto choice for social media, changing the nature of social relationships. The increasing amount of personal information that runs through this platform publicly exposes user behaviour and social trends, allowing aggregation of data through conventional intelligence collection techniques such as OSINT (Open Source Intelligence). In this paper, we propose a new method to detect and diagnose variations in overall Facebook user psychology through OSINT and machine learning techniques. We aggregate the spectrum of user sentiments and views using N-Games charts, which exhibit noticeable variations over time, validated through long-term collection. We postulate that the proposed approach can be used by security organisations to understand and evaluate user psychology, and then use that information to predict insider threats or prevent insider attacks.
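
    A minimal sketch of the variation-detection step might look as follows, assuming hypothetical per-day aggregate sentiment scores in [-1, 1]; the real pipeline would derive these from the collected Facebook wall data and the paper's N-Games charts.

        # Flag days whose aggregated sentiment deviates sharply from the norm.
        import statistics

        daily_sentiment = [0.21, 0.18, 0.25, 0.22, 0.19, -0.35, 0.20, 0.23]

        mean = statistics.mean(daily_sentiment)
        sd = statistics.stdev(daily_sentiment)

        # Anything beyond two standard deviations is flagged for review.
        flagged = [day for day, s in enumerate(daily_sentiment)
                   if abs(s - mean) > 2 * sd]
        print("days flagged for analyst review:", flagged)   # -> [5]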

    Agent-based vs agent-less sandbox for dynamic behavioral analysis

    Malicious software is detected and classified by either static analysis or dynamic analysis. In static analysis, malware samples are reverse engineered and analyzed so that signatures of the malware can be constructed. These techniques can easily be thwarted by polymorphic and metamorphic malware, as well as by obfuscation and packing techniques. In dynamic analysis, by contrast, malware samples are executed in a controlled environment using the sandboxing technique, in order to model the behavior of the malware. In this paper, we analyze Petya, Spyeye, VolatileCedar, PAFISH, etc. through agent-based and agentless dynamic sandbox systems in order to investigate and benchmark their efficiency in advanced malware detection.
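
    The contrast between the two approaches can be made concrete with a toy Python sketch; the signature database and the behaviour trace are made-up examples, not drawn from the paper's benchmark.

        # Static matching breaks on a one-byte repack; behaviour matching
        # keys on the actions the sample must still perform to do damage.
        import hashlib

        known_signatures = {hashlib.md5(b"malicious payload").hexdigest()}

        def static_match(sample_bytes):
            return hashlib.md5(sample_bytes).hexdigest() in known_signatures

        def dynamic_match(events):
            suspicious = {"overwrite_mbr", "encrypt_user_files",
                          "disable_recovery"}
            return len(suspicious & set(events)) >= 2

        original = b"malicious payload"
        repacked = original + b"\x90"    # trivial change defeats the hash
        print(static_match(original), static_match(repacked))          # True False
        print(dynamic_match(["overwrite_mbr", "encrypt_user_files"]))  # True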

    A blockchain-secured pharmaceutical distribution system to fight counterfeiting

    Counterfeit drugs have been a global concern for years. Considering the lack of transparency within the current pharmaceutical distribution system, research has shown that blockchain technology is a promising solution for an improved supply chain system. This study aims to explore the current solution proposals for distribution systems using blockchain technology. Based on a literature review of currently proposed solutions, it is identified that the secrecy of the data within the system and the nodes' reputation in decision making have not been considered. The proposed prototype uses a zero-knowledge proof protocol to ensure the integrity of the distributed data. It uses a Markov model to track each node's 'reputation score' based on its interactions, in order to predict the reliability of the nodes in consensus decision making. Analysis of the prototype demonstrates a reliable method of decision making, which results in overall improvements in the system's confidentiality, integrity, and availability. The result indicates that the decision protocol must be given significant consideration in a reliable distribution system. It is recommended that pharmaceutical distribution systems adopt a relevant protocol to design their blockchain solution. Further research is required to increase performance and reliability within blockchain distribution systems.
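
    The reputation-tracking idea can be sketched as follows, under the assumption of a two-state (good/bad interaction) Markov chain whose transition probabilities are estimated from a node's history; the paper's actual model and thresholds may differ.

        # Reputation as the stationary probability of the 'good' state of a
        # two-state Markov chain fitted to a node's interaction outcomes.
        from collections import Counter

        def reputation(history):
            pairs = Counter(zip(history, history[1:]))
            p_gg = pairs["good", "good"] / max(1, pairs["good", "good"] +
                                                  pairs["good", "bad"])
            p_bg = pairs["bad", "good"] / max(1, pairs["bad", "good"] +
                                                 pairs["bad", "bad"])
            # Stationary solution: pi_good = p_bg / (1 - p_gg + p_bg)
            return p_bg / (1.0 - p_gg + p_bg)

        history = ["good", "good", "bad", "good", "good",
                   "good", "bad", "bad", "good"]
        score = reputation(history)
        print(round(score, 2),
              "include in consensus" if score > 0.7 else "exclude")

    Nodes whose recent behaviour drags the stationary probability below the threshold would carry less weight in consensus decisions.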

    Medical Systems Data Security and Biometric Authentication in Public Cloud Servers

    Advances in distributed computing and virtualization have allowed cloud computing to establish itself as a popular data management and storage option for organizations. However, unclear safeguards and practices, as well as the evolution of legislation around privacy and data protection, contribute to data security being one of the main concerns in adopting this paradigm. Another important aspect hindering the absolute success of cloud computing is the ability to ensure the digital identity of users and protect the virtual environment through logical access controls while avoiding the compromise of its authentication mechanism or storage medium. Therefore, this paper proposes a system that addresses data security, wherein unauthorized access to data stored in a public cloud is prevented by applying a fragmentation technique together with a NoSQL database. Moreover, a system for managing and authenticating users with multimodal biometrics is also suggested, along with a mechanism to ensure the protection of biometric features. When compared with encryption, the proposed fragmentation method shows better latency performance, highlighting its strong potential for use in environments with lower latency requirements, such as healthcare IT infrastructure.
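
    The fragmentation idea can be illustrated with a short Python sketch: identity fields and medical fields are kept in separate stores, linked only by a random key held by the application, so neither store alone identifies the patient. The field split and store layout are illustrative assumptions, not the paper's schema.

        # Stand-ins for two independent NoSQL collections.
        import secrets

        store_identity, store_medical = {}, {}

        def store_record(record):
            link = secrets.token_hex(16)   # random link key, kept by the app
            store_identity[link] = {"name": record["name"],
                                    "dob": record["dob"]}
            store_medical[link] = {"diagnosis": record["diagnosis"],
                                   "meds": record["meds"]}
            return link

        def read_record(link):
            return {**store_identity[link], **store_medical[link]}

        key = store_record({"name": "A. Patient", "dob": "1980-01-01",
                            "diagnosis": "hypertension",
                            "meds": ["amlodipine"]})
        print(read_record(key))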

    White-box compression: Learning and exploiting compact table representations

    We formulate a conceptual model for white-box compression, which represents the logical columns in tabular data as an openly defined function over some actually stored physical columns. Each block of data should thus go accompanied by a header that describes this functional mapping. Because these compression functions are openly defined, database systems can exploit them using query optimization and during execution, enabling e.g. better filter predicate pushdown. In addition, we show that white-box compression is able to identify a broad variety of new opportunities for compression, leading to much better compression factors. These opportunities are identified using an automatic learning process that learns the functions from the data. We provide a recursive pattern-driven algorithm for such learning. Finally, we demonstrate the effectiveness of white-box compression on a new benchmark we contribute hereby: the Public BI benchmark, which provides a rich set of real-world datasets. We believe our basic prototype for white-box compression opens the way for future research into transparent compressed data representations on the one hand, and database system architectures that can efficiently exploit these on the other, and should be seen as another step in the direction of data management systems that are self-learning and optimize themselves for the data they are deployed on.
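
    The core idea can be shown with a toy example in Python: a logical string column matching "<prefix><number>" is stored as one small integer column plus a header describing the reconstruction function. The pattern detection below is a deliberately simplified stand-in for the paper's recursive learning algorithm, and the header format is invented for illustration.

        import re

        logical = ["ORD-1001", "ORD-1002", "ORD-1003", "ORD-1004"]

        matches = [re.fullmatch(r"([A-Z]+-)(\d+)", v) for v in logical]
        assert all(matches) and len({m.group(1) for m in matches}) == 1

        prefix = matches[0].group(1)
        physical = [int(m.group(2)) for m in matches]  # small ints compress well
        header = {"fn": "concat", "args": ["const:" + prefix, "col:0"]}

        # Because the mapping is openly described in the header, an engine can
        # push "logical = 'ORD-1003'" down as "physical = 1003" on compressed
        # data instead of decompressing the whole column first.
        reconstructed = [prefix + str(v) for v in physical]
        assert reconstructed == logical
        print(header, physical)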

    Dynamic neighbour aware power-controlled MAC for multi-hop ad hoc networks

    In ad hoc networks, resources in terms of bandwidth and battery life are limited, so using a fixed high transmission power reduces the durability of battery life and causes unnecessarily high interference while communicating with closer nodes, leading to lower overall network throughput. Thus, this paper proposes a new cross-layer MAC called Dynamic Neighbour Aware Power-controlled MAC (Dynamic NA-PMAC) for multi-hop ad hoc networks that adjusts the transmission power by estimating the communication distance based on the overheard signal strength. By dynamically controlling the transmission power based on the receivable signal strength, the probability of concurrent transmission, the durability of battery life and bandwidth utilization all increase. Moreover, in the presence of multiple overlapping signals with different strengths, an optimal transmission power is estimated dynamically to maintain fairness and avoid hidden node issues at the same time. In a given area, since power is controlled, the chance of overlap between the sensing ranges of sources and next-hop relay or destination nodes decreases, which enhances the probability of concurrent transmission and hence increases overall throughput. In addition, this paper uses a variable backoff algorithm based on the number of active neighbours, which saves energy and increases throughput when the density of active neighbours is low. The designed mechanism is tested with various random network scenarios using different traffic types, including CBR, Exponential and TCP, in both stationary and high-speed mobile scenarios, for single-hop as well as multi-hop flows. Moreover, the proposed model is benchmarked against two variants of power-controlled mechanisms, namely Min NA-PMAC and MaxRC-MinDA NA-PMAC, to prove that using a fixed minimum transmission power may lead to unfair channel access and that using different transmission powers for RTS/CTS and Data/ACK leads to a lower probability of concurrent transmission, respectively.
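
    The power-control step can be sketched as follows: estimate the sender-receiver distance from the overheard signal strength by inverting a log-distance path-loss model, then reply with just enough power plus a margin. All model constants here are assumptions for illustration, not the paper's calibration.

        import math

        PL0_DB = 40.0   # assumed path loss at reference distance d0 = 1 m
        EXP = 3.0       # assumed path-loss exponent

        def estimate_distance(tx_dbm, rssi_dbm, d0=1.0):
            """Invert PL(d) = PL0 + 10*n*log10(d/d0) to recover distance."""
            path_loss = tx_dbm - rssi_dbm
            return d0 * 10 ** ((path_loss - PL0_DB) / (10 * EXP))

        def required_tx_power(d_m, rx_sensitivity_dbm=-85.0, margin_db=8.0):
            path_loss = PL0_DB + 10 * EXP * math.log10(d_m)
            return rx_sensitivity_dbm + path_loss + margin_db

        # A 20 dBm frame overheard at -60 dBm implies roughly a 21 m link.
        d = estimate_distance(tx_dbm=20.0, rssi_dbm=-60.0)
        print(f"estimated distance {d:.1f} m, reply at "
              f"{required_tx_power(d):.1f} dBm")

    Transmitting at the estimated minimum rather than a fixed maximum is what shrinks the interference footprint and allows the concurrent transmissions described above.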